AI Explainability
Increasing AI Explainability by LLM Driven Standard Processes
This paper introduces an approach to increasing the explainability of artificial intelligence (AI) systems by embedding Large Language Models (LLMs) within standardized analytical processes. While traditional explainable AI (XAI) methods focus on feature attribution or post-hoc interpretation, the proposed framework integrates LLMs into defined decision models such as Question-Option-Criteria (QOC), Sensitivity Analysis, Game Theory, and Risk Management. By situating LLM reasoning within these formal structures, the approach transforms opaque inference into transparent and auditable decision traces. A layered architecture is presented that separates the reasoning space of the LLM from the explainable process space above it. Empirical evaluations show that the system can reproduce human-level decision logic in decentralized governance, systems analysis, and strategic reasoning contexts. The results suggest that LLM-driven standard processes provide a foundation for reliable, interpretable, and verifiable AI-supported decision making.
- North America > United States > Michigan > Washtenaw County > Ann Arbor (0.04)
- Europe > Sweden > Västra Götaland > Gothenburg (0.04)
- Europe > Germany (0.04)
- Asia > China (0.04)
- Leisure & Entertainment > Games (0.50)
- Information Technology > Security & Privacy (0.35)
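As an illustration of the kind of "auditable decision trace" the abstract above describes, here is a minimal Python sketch of a QOC record, assuming each LLM judgment is stored as a per-option, per-criterion score together with its rationale. All names here are hypothetical and are not taken from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class QOCTrace:
    """Hypothetical Question-Option-Criteria (QOC) decision trace."""
    question: str
    options: list[str]
    criteria: list[str]
    # scores[option][criterion] -> (score, LLM-provided rationale)
    scores: dict[str, dict[str, tuple[float, str]]] = field(default_factory=dict)

    def record(self, option: str, criterion: str, score: float, rationale: str) -> None:
        """Store one LLM judgment so the final choice can be audited later."""
        self.scores.setdefault(option, {})[criterion] = (score, rationale)

    def best_option(self) -> str:
        """Choose the option with the highest total score across all criteria."""
        return max(self.options,
                   key=lambda o: sum(s for s, _ in self.scores.get(o, {}).values()))
```

In this shape the final recommendation is a deterministic function of the recorded judgments, so every step of the decision can be inspected after the fact, which is the transparency property the framework aims for.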
Green LIME: Improving AI Explainability through Design of Experiments
Stadler, Alexandra, Müller, Werner G., Harman, Radoslav
In artificial intelligence (AI), the complexity of many models and processes often surpasses human interpretability, making it challenging to understand why a specific prediction is made. This lack of transparency is particularly problematic in critical fields like healthcare, where trust in a model's predictions is paramount. As a result, the explainability of machine learning (ML) and other complex models has become a key area of focus. Efforts to improve model interpretability often involve experimenting with AI systems and approximating their behavior through simpler mechanisms. However, these procedures can be resource-intensive. Optimal design of experiments, which seeks to maximize the information obtained from a limited number of observations, offers promising methods for improving the efficiency of these explainability techniques. To demonstrate this potential, we explore Local Interpretable Model-agnostic Explanations (LIME), a widely used method introduced by Ribeiro, Singh, and Guestrin (2016). LIME provides explanations by generating new data points near the instance of interest and passing them through the model. While effective, this process can be computationally expensive, especially when predictions are costly or require many samples. LIME is highly versatile and can be applied to a wide range of models and datasets. In this work, we focus on models involving tabular data, regression tasks, and linear models as interpretable local approximations. By applying techniques from optimal design of experiments, we reduce the number of function evaluations of the complex model, thereby cutting the computational effort of LIME substantially. We consider this modified version of LIME to be energy-efficient or "green".
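To make the mechanism concrete, here is a minimal LIME-style local surrogate in Python. It uses plain random perturbations; the paper's "green" variant would replace that sampling step with an optimal experimental design so that fewer black-box evaluations are needed. The function and parameter names are illustrative, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import Ridge

def local_surrogate(predict_fn, x, n_samples=50, width=0.75, seed=0):
    """Fit a weighted linear model around instance x of a black-box predict_fn."""
    rng = np.random.default_rng(seed)
    # Perturb around the instance of interest (random here; Green LIME
    # would instead pick these points via optimal experimental design
    # to extract more information from fewer evaluations).
    X = x + rng.normal(scale=width, size=(n_samples, x.shape[0]))
    y = predict_fn(X)                      # the costly black-box calls
    # Weight samples by proximity to x (exponential kernel, as in LIME).
    d2 = ((X - x) ** 2).sum(axis=1)
    w = np.exp(-d2 / width ** 2)
    surrogate = Ridge(alpha=1.0).fit(X, y, sample_weight=w)
    return surrogate.coef_                 # local feature attributions
```

Since each row of X costs one call to the black-box model, any design that achieves the same local fit with fewer rows directly reduces the method's computational, and hence energy, footprint.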
Towards a Praxis for Intercultural Ethics in Explainable AI
Other research has also noted the concentration of AI research and development within the Western world [13, 48] and how AI systems primarily embed the cultural values and practices of people within these respective regions, alienating certain groups of users and causing cultural harm [52]. While the development, use, and implementation of AI has transcended borders, a limited amount of work focuses on democratizing the concept of explainable AI to the "majority world" [4], leaving much room to explore and develop new approaches within this space that cater to the distinct needs of users within this region. This article introduces the concept of an intercultural ethics approach to AI explainability. It examines how cultural nuances impact the adoption and use of technology, the factors that impede the explanation of technical concepts such as AI, and how integrating an intercultural ethics approach in the development of XAI can improve user understanding and facilitate efficient usage of these methods. We first discuss what it means to explain, reviewing relevant literature within explainable AI and introducing intercultural ethics. We then highlight barriers that impact explainable AI. Next, we introduce the concept of using intercultural ethics to inform AI explainability, outlining steps for researchers interested in leveraging this approach. We conclude the paper by reflecting on the prospect of an intercultural ethics approach to XAI, the limitations of such an approach, and potential areas to build upon this work.
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.15)
- Asia > India (0.04)
- Asia > Bangladesh (0.04)
- (11 more...)
- Questionnaire & Opinion Survey (0.68)
- Research Report (0.64)
Explainability in AI Policies: A Critical Review of Communications, Reports, Regulations, and Standards in the EU, US, and UK
Nannini, Luca, Balayn, Agathe, Smith, Adam Leon
Public attention to the explainability of artificial intelligence (AI) systems has risen in recent years, driving demand for methodologies that support human oversight. This has translated into a proliferation of research outputs, such as those from Explainable AI, aimed at enhancing transparency and control for system debugging and monitoring, and at making system processes and outputs intelligible for user-facing services. Yet such outputs are difficult to adopt on a practical level due to the lack of a common regulatory baseline and the contextual nature of explanations. Governmental policies are now attempting to address this exigency; however, it remains unclear to what extent published communications, regulations, and standards adopt an informed perspective to support research, industry, and civil interests. In this study, we perform the first thematic and gap analysis of this plethora of policies and standards on explainability in the EU, US, and UK. Through a rigorous survey of policy documents, we first contribute an overview of governmental regulatory trajectories within AI explainability and its sociotechnical impacts. We find that policies are often informed by coarse notions of, and requirements for, explanations. This might be due to the willingness to frame explanations foremost as a risk management tool for AI oversight, but also to the lack of consensus on what constitutes a valid algorithmic explanation and on how feasible the implementation and deployment of such explanations are across the stakeholders of an organization. Informed by AI explainability research, we conduct a gap analysis of existing policies, leading us to formulate a set of recommendations on how to address explainability in regulations for AI systems, especially discussing the definition, feasibility, and usability of explanations, as well as the allocation of accountability to explanation providers.
- Europe > United Kingdom (0.68)
- North America > United States > California > San Francisco County > San Francisco (0.14)
- Europe > Austria > Vienna (0.14)
- (17 more...)
- Overview (0.92)
- Research Report > New Finding (0.34)
- Law > Statutes (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Regional Government > Europe Government (0.94)
AI Explainability at the IHM Conference 2022 at UNamur: Misdirection of XAI from technical solutions to user adaptation
On the first day, I attended the workshop on AI Explainability that brought together researchers from both the HCI and Computer Science communities. The workshop was opened by UNamur professors Bruno Dumas, specializing in HCI, and Benoît Frénay who works on Machine Learning. Dr Frénay presented the XAI research field and the interdisciplinary research being conducted at UNamur on this topic. He pointed out the lack of a user-centered approach in the XAI machine learning community where less than 1% of accepted papers in major conferences, such as NeurIPS, test their XAI methods with user studies. The rest of the morning was devoted to the presentation of eight abstracts, including mine, related to XAI research with either a computer science or HCI angle.
Using Explainable AI in Decision-Making Applications
There is no instruction manual for decision-making. Important decisions are usually made by analyzing large amounts of data to find the best way to solve a problem; that is where we truly rely on logic and deduction. It is why surgeons dig into a patient's anamnesis, and why businesses gather key people to see the bigger picture before making a major move. Relying on AI for decision-making can significantly reduce the time spent on research and data gathering.
- Health & Medicine > Therapeutic Area > Oncology (1.00)
- Banking & Finance > Trading (0.96)
What are data scientists' biggest concerns? The 2022 State of Data Science report has the answers
Data science is a quickly growing field as organizations of all sizes embrace artificial intelligence (AI) and machine learning (ML), and along with that growth has come no shortage of concerns. The 2022 State of Data Science report, released today by data science platform vendor Anaconda, identifies key trends and concerns for data scientists and the organizations that employ them. Among the trends identified by Anaconda is the continued dominance of the open-source Python programming language in the data science landscape. Among the key concerns identified in the report were the barriers to adoption of data science overall.
The Explainable AI Imperative Amid Global AI Regulation
The General Data Protection Regulation (GDPR) was a big first step toward giving consumers control of their data. As powerful as this privacy initiative is, a new personal data challenge has emerged. Now, privacy concerns are focused on what companies are doing with data once they have it. This is due to the rise of artificial intelligence (AI) as neural networks accelerate the exploitation of personal data and raise new questions about the need for further regulation and safeguarding of privacy rights. Core to the concern about data privacy are the algorithms used to develop AI models.
- Europe (0.30)
- North America > United States (0.05)
- Asia > China (0.05)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
EETimes - AI in Automotive: Current and Future Impact
AI is neither artificial, nor is it intelligent. AI cannot recognize things without extensive human training. AI exhibits completely different logic from humans in terms of recognizing, understanding and classifying objects or scenes. The label implies that AI is analogous to human intelligence. AI often lacks any semblance of common sense, can be easily fooled or corrupted and can fail in unexpected and unpredictable ways.
- Information Technology > Security & Privacy (1.00)
- Government (1.00)
- Automobiles & Trucks (1.00)
- Law > Statutes (0.71)
Solving the Problem of Bias in Artificial Intelligence
Back in 2018, the American Civil Liberties Union found that Amazon's Rekognition, a face surveillance technology used by police departments and courts across the US, exhibits AI bias. During the test, the software incorrectly matched 28 members of Congress with mugshots of people who had been arrested for committing a crime, and 40% of the false matches were people of color. Following mass protests in which Amazon's employees refused to contribute to AI tools that reproduce facial recognition bias, the tech giant announced a one-year moratorium on law enforcement agencies using the platform. The incident stirred new debate about bias in artificial intelligence algorithms and prompted companies to search for new solutions to the AI bias paradox. In this article, we'll dot the i's, zooming in on the concept, root causes, types, and ethical implications of AI bias, and list practical debiasing techniques shared by our AI consultants that are worth including in your AI strategy.
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Law (1.00)
- Information Technology (1.00)
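The Rekognition episode above is, at its core, a claim about unequal false-match rates across demographic groups. The following is a minimal sketch of such a disparity check in Python, applied to hypothetical arrays; the article reports only aggregate counts, not a dataset.

```python
import numpy as np

def false_match_rate(matched: np.ndarray, is_true_match: np.ndarray) -> float:
    """Fraction of genuinely non-matching pairs the system wrongly flagged."""
    negatives = ~is_true_match
    return (matched & negatives).sum() / max(negatives.sum(), 1)

def rates_by_group(matched, is_true_match, group):
    """False-match rate per demographic group; large gaps indicate bias."""
    return {g: false_match_rate(matched[group == g], is_true_match[group == g])
            for g in np.unique(group)}
```

A debiasing pipeline of the kind the article promises would monitor these per-group rates and, for example, rebalance training data or recalibrate decision thresholds until the gaps fall within an acceptable tolerance.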